Color Constancy Convolutional Autoencoder
In this paper, we study the importance of pre-training for the generalization
capability in the color constancy problem. We propose two novel approaches
based on convolutional autoencoders: an unsupervised pre-training algorithm
using a fine-tuned encoder and a semi-supervised pre-training algorithm using a
novel composite-loss function. This allows us to mitigate the data scarcity
problem and achieve results competitive with the state of the art on the
ColorChecker RECommended dataset while requiring far fewer parameters. We further
study the over-fitting phenomenon on the recently introduced version of
INTEL-TUT Dataset for Camera Invariant Color Constancy Research, which has both
field and non-field scenes acquired by three different camera models.
Comment: 6 pages, 1 figure, 3 tables
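The semi-supervised variant trains the autoencoder with a composite loss that balances reconstruction quality against illumination accuracy. The paper does not spell out the loss here, so the sketch below is an assumption: a mean-squared reconstruction term plus a weighted angular-error term, angular error being the standard evaluation metric in color constancy. Both the form of the loss and the trade-off weight `lam` are illustrative, not taken from the paper.

```python
import numpy as np

def angular_error(est, gt):
    """Angle (radians) between estimated and ground-truth illuminants."""
    cos = np.dot(est, gt) / (np.linalg.norm(est) * np.linalg.norm(gt))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def composite_loss(x, x_rec, est_illum, gt_illum, lam=0.5):
    """Hypothetical composite loss: reconstruction MSE + lam * angular error.

    lam is an assumed trade-off weight; the paper's actual formulation
    may combine the terms differently.
    """
    rec = np.mean((x - x_rec) ** 2)
    return rec + lam * angular_error(est_illum, gt_illum)
```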
Monte Carlo Dropout Ensembles for Robust Illumination Estimation
Computational color constancy is a preprocessing step used in many camera
systems. The main aim is to discount the effect of the illumination on the
colors in the scene and restore the original colors of the objects. Recently,
several deep learning-based approaches have been proposed to solve this problem
and they often led to state-of-the-art performance in terms of average errors.
However, for extreme samples, these methods fail and lead to high errors. In
this paper, we address this limitation by proposing to aggregate different deep
learning methods according to their output uncertainty. We estimate the
relative uncertainty of each approach using Monte Carlo dropout and the final
illumination estimate is obtained as the sum of the different model estimates
weighted by the log-inverse of their corresponding uncertainties. The proposed
framework leads to state-of-the-art performance on the INTEL-TAU dataset.
Comment: 7 pages, 6 figures
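The aggregation rule described above, per-model estimates weighted by the log-inverse of their Monte Carlo dropout uncertainties, can be sketched as follows. The variance-based uncertainty measure and the weight normalization are assumptions for illustration; the paper's exact formulation may differ.

```python
import numpy as np

def mc_dropout_uncertainty(samples):
    """Uncertainty of one model: variance across T stochastic forward
    passes run with dropout kept active.  samples: (T, 3) estimates.
    """
    return float(np.mean(np.var(samples, axis=0)))

def aggregate_estimates(estimates, uncertainties, eps=1e-8):
    """Combine per-model illuminant estimates, weighting each by the
    log-inverse of its uncertainty (normalization form is assumed)."""
    w = np.log(1.0 / (np.asarray(uncertainties) + eps))
    w = np.maximum(w, eps)            # keep weights nonnegative (assumption)
    w = w / w.sum()
    est = np.sum(w[:, None] * np.asarray(estimates), axis=0)
    return est / np.linalg.norm(est)  # illuminants are directions; normalize
```

A model that is confident (low variance across dropout passes) thus dominates the ensemble, while a model that is uncertain on an extreme sample is down-weighted.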
Revisiting Gray Pixel for Statistical Illumination Estimation
We present a statistical color constancy method that relies on novel gray
pixel detection and mean shift clustering. The method, called Mean Shifted Grey
Pixel (MSGP), is based on the observation that true-gray pixels are aligned
along a single direction. Our solution is compact, easy to compute, and
requires no training. Experiments on two real-world benchmarks show that the
proposed approach outperforms state-of-the-art methods in the camera-agnostic
scenario. In the setting where the camera is known, MSGP outperforms all
statistical methods.
Comment: updated; will appear in VISAPP 2019 (long paper)
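As a rough illustration of the pipeline (detect candidate gray pixels, then recover the dominant direction of their colors, which is the illuminant direction), here is a minimal numpy sketch. The channel-spread candidate test, the tolerance, and the mean-shift bandwidth are simplifications for illustration, not the paper's actual gray-pixel measure.

```python
import numpy as np

def detect_gray_candidates(img, tol=0.15):
    """Candidate gray pixels: channel spread small relative to intensity.

    img: (H, W, 3) in [0, 1].  This is a simplified stand-in for the
    paper's gray-pixel detection measure.
    """
    intensity = img.mean(axis=2) + 1e-8
    spread = (img.max(axis=2) - img.min(axis=2)) / intensity
    return img[spread < tol]

def mean_shift_mode(points, bandwidth=0.05, iters=20):
    """Crude mean shift on normalized color directions: iterate a
    Gaussian-weighted mean to find the dominant direction (the mode)."""
    dirs = points / (np.linalg.norm(points, axis=1, keepdims=True) + 1e-8)
    mode = dirs.mean(axis=0)
    for _ in range(iters):
        mode = mode / (np.linalg.norm(mode) + 1e-8)
        d = np.linalg.norm(dirs - mode, axis=1)
        w = np.exp(-(d / bandwidth) ** 2)
        mode = (w[:, None] * dirs).sum(axis=0) / w.sum()
    return mode / np.linalg.norm(mode)
```

On a synthetic image of gray surfaces under a tinted illuminant, the recovered mode matches the illuminant direction, since every gray pixel's color is the illuminant scaled by albedo.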
Probabilistic Color Constancy
In this paper, we propose a novel unsupervised color constancy method, called
Probabilistic Color Constancy (PCC). We define a framework for estimating the
illumination of a scene by weighting the contribution of different image
regions using a graph-based representation of the image. To estimate the weight
of each (super-)pixel, we rely on two assumptions: (Super-)pixels with similar
colors contribute similarly and darker (super-)pixels contribute less. The
resulting system has a single global optimum. The proposed method achieves
performance competitive with the state of the art on the INTEL-TAU dataset.
Comment: 5 pages, 1 figure
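The two weighting assumptions (similar colors should receive similar weights; darker regions should contribute less) can be encoded as a quadratic objective over the similarity graph, which is what yields a single global optimum. A hedged sketch, where the brightness prior and the graph-propagation form are illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np

def smooth_weights(prior, A, alpha=0.5):
    """Propagate prior weights over a row-stochastic color-similarity
    graph A, so similar regions end up with similar weights.

    The quadratic smoothing objective has the unique closed-form solution
    (I - alpha*A)^{-1} (1 - alpha) * prior; alpha is an assumed parameter.
    """
    n = len(prior)
    w = np.linalg.solve(np.eye(n) - alpha * A, (1.0 - alpha) * prior)
    return w / w.sum()

def pcc_estimate(region_colors, A=None):
    """Weighted illuminant estimate from (super-)pixel mean colors (N, 3)."""
    brightness = region_colors.mean(axis=1)   # darker regions weigh less
    prior = brightness / brightness.sum()
    w = prior if A is None else smooth_weights(prior, A)
    est = (w[:, None] * region_colors).sum(axis=0)
    return est / np.linalg.norm(est)
```

Because the solution is a single linear solve, no iterative optimization (and hence no risk of local minima) is involved.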
Bag of Color Features For Color Constancy
In this paper, we propose a novel color constancy approach, called Bag of
Color Features (BoCF), building upon Bag-of-Features pooling. The proposed
method substantially reduces the number of parameters needed for illumination
estimation. At the same time, the proposed method is consistent with the color
constancy assumption stating that global spatial information is not relevant
for illumination estimation and local information (edges, etc.) is sufficient.
Furthermore, BoCF is consistent with color constancy statistical approaches and
can be interpreted as a learning-based generalization of many statistical
approaches. To further improve the illumination estimation accuracy, we propose
a novel attention mechanism for the BoCF model with two variants based on
self-attention. The BoCF approach and its variants achieve results competitive
with the state of the art while requiring far fewer parameters on three
benchmark datasets: ColorChecker RECommended, INTEL-TUT version 2, and NUS8.
Comment: 12 pages, 5 figures, 6 tables
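Bag-of-Features pooling discards spatial layout by construction, which is exactly the consistency argument made above: the descriptor keeps only order-free local statistics. A minimal sketch of the pooling step follows; the codebook and the soft-assignment kernel are illustrative, whereas in the paper the feature extractor, codebook, and regressor are learned end-to-end.

```python
import numpy as np

def bocf_pool(features, codebook):
    """Soft-assign local color features (N, 3) to K codebook entries and
    average-pool into a fixed-length, order-free descriptor (K,).

    Averaging over all locations discards global spatial layout; only
    local color statistics survive in the descriptor.
    """
    d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    a = np.exp(-d)                        # soft assignment (assumed kernel)
    a = a / a.sum(axis=1, keepdims=True)
    return a.mean(axis=0)
```

The pooled descriptor would then feed a small regression head that outputs the illuminant estimate; the attention variants reweight the assignments before pooling.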